Results 1 - 10 of 10
1.
36th IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2022 ; : 338-345, 2022.
Article in English | Scopus | ID: covidwho-2018898

ABSTRACT

Teaching High-Performance Computing (HPC) in undergraduate programs represents a significant challenge in most universities in developing countries like Mexico. Deficiencies in the required infrastructure and equipment, inadequate curricula in computer engineering programs (and resistance to changing them), and students' lack of interest, motivation, or knowledge of this area are the main difficulties to overcome. The COVID-19 pandemic adds a further challenge to teaching HPC in these programs. Despite these obstacles, some strategies have been developed to introduce HPC concepts to Mexican students without necessarily modifying the traditional curricula. This paper presents a case study of four public universities in Mexico based on our experience as instructors. We also propose a course that introduces HPC principles while accounting for the heterogeneous background of the students at such universities. The results are reported in terms of the number of students enrolling in related classes and participating in extracurricular projects. © 2022 IEEE.

2.
Turkish Journal of Electrical Engineering & Computer Sciences ; 30(4):1571-1585, 2022.
Article in English | Academic Search Complete | ID: covidwho-1955673

ABSTRACT

In recent years, the need to work remotely, and consequently the need for remotely available computer-based systems, has increased substantially. This trend accelerated dramatically with the onset of the 2020 pandemic. Local data is often produced, stored, and processed in the cloud to meet this flood of computation and storage needs. Historically, HPC (high-performance computing) and big data technologies have been used for the storage and processing of large data sets. Both HPC and Hadoop can serve as solutions for analytical work, though the differences between them may not be obvious: both use parallel processing techniques and allow data to be stored in either a centralized or a distributed manner. Recent studies have focused on hybrid approaches that combine both technologies, bridging the gap between HPC and big data with distributed computing machines at the layer described. This paper is motivated by the need for a distributed computing framework that can scale from SOC (system-on-chip) boards to desktop computers and servers. To this end, we propose a distributed computing environment that scales across devices with heterogeneous architectures, in which clusters can be set up from resource-limited nodes and jobs run on top of them. The solution can be thought of as a minimalist hybrid approach between HPC and big data. Within the scope of this study, not only is the design of the proposed system detailed, but critical modules and subsystems are also implemented as proof of concept. [FROM AUTHOR] Copyright of Turkish Journal of Electrical Engineering & Computer Sciences is the property of Scientific and Technical Research Council of Turkey and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission.
However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all abstracts.)

3.
2nd International Conference on Machine Learning, Internet of Things and Big Data, ICMIB 2021 ; 431:371-380, 2022.
Article in English | Scopus | ID: covidwho-1872365

ABSTRACT

CT scans have proved to be one of the best ways to identify the presence of COVID-19 and diagnose patients. To reduce the workload on doctors, many researchers have proposed automatic classification techniques. However, most of that research focused on improving the proposed models' accuracy. To improve the reliability and robustness of predictions, recent work [1] focused on assigning weights to predictions. This paper aims to improve on that existing approach in both reliability and computational terms. We compare different metrics' ability to compute the weights of the base models and show the computational-time improvement obtained through parallelism. The proposed approach achieved a speedup of about 4.2× and an efficiency of around 60%. The proposed parallelizable approach works best when a bulk of test samples must be tested, reducing the total testing time. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
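The weighting-plus-parallelism scheme this abstract summarises can be sketched as follows. This is a minimal illustration, not the authors' code: the model weights, the two-class CT-scan setting, and the chunking strategy are all assumptions made for the example.

```python
from concurrent.futures import ProcessPoolExecutor

import numpy as np

WEIGHTS = [0.5, 0.3, 0.2]   # e.g. derived from per-model validation accuracy

def weighted_vote(probs, weights):
    """Combine per-model class probabilities using per-model weights."""
    probs = np.asarray(probs, dtype=float)     # shape: (n_models, n_classes)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalise the weights
    combined = weights @ probs                 # weighted average of probabilities
    return int(np.argmax(combined))

def classify_batch(batch):
    # Each element of `batch` is one sample's per-model probability matrix.
    return [weighted_vote(p, WEIGHTS) for p in batch]

if __name__ == "__main__":
    # Two hypothetical COVID/non-COVID samples, three base models each.
    samples = [
        [[0.9, 0.1], [0.8, 0.2], [0.4, 0.6]],   # models mostly agree on class 0
        [[0.2, 0.8], [0.3, 0.7], [0.6, 0.4]],   # models mostly agree on class 1
    ]
    # Parallelise over chunks of test samples, as in the bulk-testing setting.
    with ProcessPoolExecutor(max_workers=2) as pool:
        chunks = [[s] for s in samples]
        results = [r for chunk in pool.map(classify_batch, chunks) for r in chunk]
    print(results)   # [0, 1]
```

Because each test sample is scored independently, the speedup comes simply from splitting the bulk of samples across worker processes.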

4.
International Journal of Advanced Computer Science and Applications ; 12(8), 2021.
Article in English | ProQuest Central | ID: covidwho-1835996

ABSTRACT

Information diffusion in social networks is widely used in many fields today, from online marketing and e-government campaigns to predicting large social events. Some studies focus on discovering methods to accelerate the parameter calculation for information diffusion forecasting in order to improve the efficiency of the information diffusion problem. Betweenness Centrality is a significant indicator for identifying the important people in a social network who should be targeted to maximize information diffusion. In this paper, we therefore propose the RED-BET method to improve information diffusion on social networks via a hybrid approach that quickly determines the nodes with high Betweenness Centrality. The main idea of the proposed method is to combine graph reduction with parallelization of the Betweenness Centrality calculation. Experimental results on the currently popular large datasets of SNAP and AMiner demonstrate that our proposed method improves performance by 1.2 to 1.41 times compared to the TeexGraph toolkit, by 1.76 to 2.55 times compared to NetworKit, and by 1.05 to 1.1 times compared to the bigGraph toolkit.
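The parallelization half of the hybrid approach described in this abstract can be sketched with Brandes' algorithm on a toy graph: each source node's shortest-path contribution is independent, so sources can be split across workers and the partial sums merged. This is an illustration only; RED-BET's graph-reduction step and its actual partitioning are not reproduced here.

```python
from collections import deque
from concurrent.futures import ProcessPoolExecutor

# Toy undirected path graph 0-1-2-3 as an adjacency list.
GRAPH = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def partial_betweenness(sources):
    """Brandes' accumulation from a subset of sources (unweighted graph)."""
    bc = dict.fromkeys(GRAPH, 0.0)
    for s in sources:
        stack, preds = [], {v: [] for v in GRAPH}
        sigma = dict.fromkeys(GRAPH, 0)   # number of shortest paths from s
        dist = dict.fromkeys(GRAPH, -1)
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:                      # BFS from s
            v = queue.popleft()
            stack.append(v)
            for w in GRAPH[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = dict.fromkeys(GRAPH, 0.0)
        while stack:                      # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

if __name__ == "__main__":
    # Parallelise over disjoint chunks of source nodes, then merge.
    chunks = [[0, 1], [2, 3]]
    with ProcessPoolExecutor(max_workers=2) as pool:
        parts = list(pool.map(partial_betweenness, chunks))
    bc = {v: sum(p[v] for p in parts) / 2 for v in GRAPH}  # halve: undirected
    print(bc)   # the interior nodes 1 and 2 score highest
```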

5.
Genes (Basel) ; 13(4)2022 04 13.
Article in English | MEDLINE | ID: covidwho-1785609

ABSTRACT

Several variants of the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are emerging all over the world. Variant surveillance from genome sequencing has become crucial for determining whether mutations in these variants render the virus more infectious, more potent, or resistant to existing vaccines and therapeutics. Meanwhile, repeatedly analyzing large volumes of raw sequencing data with currently available code-based bioinformatics tools is tremendously challenging in this unprecedented pandemic time, owing to limited experts and computational resources. Therefore, to hasten variant surveillance efforts, we developed an installation-free cloud workflow for robust mutation profiling of SARS-CoV-2 variants from multiple Illumina sequencing data sets. Herein, 55 raw sequencing data sets representing four early SARS-CoV-2 variants of concern (Alpha, Beta, Gamma, and Delta) from an open-access database were used to test our workflow's performance. Our workflow could automatically identify mutated sites of the variants, along with reliable annotation of the protein-coding genes, in a cost-effective and timely manner, by harnessing parallel cloud computing in one execution under resource-limited settings. In addition, our workflow can also generate a consensus genome sequence that can be shared with others in public data repositories to support global variant surveillance efforts.


Subject(s)
COVID-19 , SARS-CoV-2 , COVID-19/genetics , High-Throughput Nucleotide Sequencing , Humans , Mutation , SARS-CoV-2/genetics , Spike Glycoprotein, Coronavirus/genetics , Workflow
6.
International Journal of Advances in Soft Computing and its Applications ; 13(3):170-180, 2021.
Article in English | Scopus | ID: covidwho-1589406

ABSTRACT

Like many countries, Jordan resorted to lockdown in an attempt to contain the outbreak of Coronavirus (Covid-19). A set of precautions such as quarantines, isolation, and social distancing was taken to tackle the rapid spread of Covid-19. However, the authorities faced a serious issue in enforcing quarantine instructions and social distancing among the population. In this paper, a social distancing monitoring system is designed to alert the authorities if any citizen violates the quarantine instructions, and to detect crowds and measure their social distancing using an object-tracking technique that works in real time. The system utilises the widespread surveillance cameras that already exist in public places and outside many residential buildings. To ensure the effectiveness of this approach, the system uses cameras deployed on the campus of Al-Zaytoonah University of Jordan. The results showed the efficiency of this system in tracking people and determining the distances between them in accordance with public safety instructions. This work is the first approach to handle the classification challenges for moving objects using a shared-memory model of multicore techniques. © Al-Zaytoonah University of Jordan (ZUJ).
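The distance-measurement step such a system performs can be sketched as a pairwise check over tracked people's positions. This is an illustrative sketch only: the pixels-per-metre calibration and the 2 m threshold are assumptions, not values taken from the paper.

```python
import math

# Hypothetical calibration: pixels per metre after mapping detections
# onto a common ground plane (in practice derived from a camera homography).
PIXELS_PER_METRE = 100.0
MIN_DISTANCE_M = 2.0    # assumed public-safety distancing threshold

def violations(centroids):
    """Return (i, j, distance_m) for every pair closer than the threshold.

    `centroids` are (x, y) ground-plane positions of tracked people, in pixels.
    """
    out = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            dx = centroids[i][0] - centroids[j][0]
            dy = centroids[i][1] - centroids[j][1]
            dist_m = math.hypot(dx, dy) / PIXELS_PER_METRE
            if dist_m < MIN_DISTANCE_M:
                out.append((i, j, round(dist_m, 2)))
    return out

if __name__ == "__main__":
    people = [(0, 0), (120, 0), (500, 500)]   # two close together, one far away
    print(violations(people))   # [(0, 1, 1.2)]
```

In a real deployment this check would run once per video frame on the output of the object tracker, flagging pairs for the authorities only when a violation persists across frames.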

7.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: covidwho-1545908

ABSTRACT

MOTIVATION: Understanding chemical-gene interactions (CGIs) is crucial for screening drugs. Wet experiments are usually costly and laborious, which limits relevant studies to a small scale. By contrast, computational studies enable efficient in-silico exploration. For the CGI prediction problem, a common method is to perform systematic analyses on a heterogeneous network involving various biomedical entities. Recently, graph neural networks have become popular in the field of relation prediction. However, the inherent heterogeneous complexity of biological interaction networks and the massive amount of data pose enormous challenges. This paper aims to develop a data-driven model capable of learning latent information from the interaction network and making correct predictions. RESULTS: We developed BioNet, a deep biological network model with a graph encoder-decoder architecture. The graph encoder uses graph convolution to learn latent information embedded in the complex interactions among chemicals, genes, diseases, and biological pathways. The learning process consists of two consecutive steps. The embedded information learnt by the encoder is then employed to make multi-type interaction predictions between chemicals and genes with a tensor-decomposition decoder based on the RESCAL algorithm. BioNet includes 79,325 entities as nodes and 34,005,501 relations as edges. To train such a massive deep graph model, BioNet introduces a parallel training algorithm utilizing multiple Graphics Processing Units (GPUs). The evaluation experiments indicated that BioNet exhibits outstanding prediction performance, with a best area under the Receiver Operating Characteristic (ROC) curve of 0.952, which significantly surpasses state-of-the-art methods. For further validation, top predicted CGIs of cancer and COVID-19 by BioNet were verified against external curated data and published literature.


Subject(s)
Computational Biology , Computer Simulation , Models, Biological , Neural Networks, Computer
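The RESCAL-style decoder mentioned in the abstract above scores a (chemical, relation, gene) triple bilinearly: each relation type r has its own interaction matrix R_r, and the score is a_i^T R_r a_j over the two entity embeddings. A minimal sketch with random stand-in embeddings (not BioNet's trained parameters; sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

n_entities, n_relations, d = 6, 3, 4
A = rng.normal(size=(n_entities, d))       # entity embeddings (from the encoder)
R = rng.normal(size=(n_relations, d, d))   # one interaction matrix per relation type

def rescal_score(chem, gene, rel):
    """RESCAL bilinear score a_chem^T R_rel a_gene for one triple."""
    return float(A[chem] @ R[rel] @ A[gene])

def predict_proba(chem, gene, rel):
    # Squash the unbounded bilinear score into a probability-like value.
    return 1.0 / (1.0 + np.exp(-rescal_score(chem, gene, rel)))

if __name__ == "__main__":
    p = predict_proba(0, 1, 2)
    print(f"P(chemical 0 interacts with gene 1 via relation 2) = {p:.3f}")
```

Because each relation type gets its own matrix R_r, the same entity pair can receive different scores for different interaction types, which is what enables the multi-type predictions the abstract describes.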
8.
Elife ; 10: 2021 10 15.
Article in English | MEDLINE | ID: covidwho-1518778

ABSTRACT

Simulating nationwide realistic individual movements with a detailed geographical structure can help optimise public health policies. However, existing tools have limited resolution or can only account for a limited number of agents. We introduce Epidemap, a new framework that can capture the daily movement of more than 60 million people in a country at a building-level resolution in a realistic and computationally efficient way. By applying it to the case of an infectious disease spreading in France, we uncover hitherto neglected effects, such as the emergence of two distinct peaks in the daily number of cases or the importance of local density in the timing of arrival of the epidemic. Finally, we show that the importance of super-spreading events strongly varies over time.


Subject(s)
COVID-19/epidemiology , Communicable Diseases/epidemiology , Epidemics/statistics & numerical data , Geography/methods , Public Health/methods , France/epidemiology , Humans , Public Health/instrumentation , Spatial Analysis
9.
Front Chem ; 9: 750325, 2021.
Article in English | MEDLINE | ID: covidwho-1518465

ABSTRACT

Ultra-large-scale molecular docking can improve the accuracy of lead compounds in drug discovery. In this study, we developed molecular docking software, Vina@QNLM, which can use more than 480,000 parallel processes to search for potential lead compounds from hundreds of millions of compounds. We proposed a task-scheduling mechanism for large-scale parallelism based on Vinardo and the Sunway supercomputer architecture. We then adapted the core docking algorithm to take full advantage of the heterogeneous multicore processor architecture for intensive computing. We successfully scaled it to 10,465,065 cores (161,001 management processing elements and 10,304,064 computing processing elements), with a strong scalability of 55.92%. To the best of our knowledge, this is the first time that 10 million cores have been used for molecular docking on Sunway. The introduction of the heterogeneous multicore processor architecture achieved the best speedup, 11× that of using the management processing elements of Sunway alone. The performance of Vina@QNLM was comprehensively evaluated using the CASF-2013 and CASF-2016 protein-ligand benchmarks, and its screening power was the highest of the 27 software packages tested on the CASF-2013 benchmark. In existing applications, we used Vina@QNLM to dock more than 10 million molecules to nine rigid proteins related to SARS-CoV-2 within 8.5 h on 10 million cores. We also developed a platform for the general public to use the software.
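The task-scheduling pattern such large-scale docking relies on can be sketched as a shared queue of ligand-protein pairs drained by workers. This is a toy sketch only: the scoring stub, ligand names, and worker count are placeholders, not Vinardo or Sunway specifics.

```python
import queue
import threading

def dock_score(ligand, protein):
    """Stand-in for a docking-engine call (hypothetical deterministic stub)."""
    return -(len(ligand) + len(protein)) % 13

def worker(tasks, results):
    # Pull tasks until the queue is empty; list.append is thread-safe in CPython.
    while True:
        try:
            ligand, protein = tasks.get_nowait()
        except queue.Empty:
            return
        results.append((ligand, protein, dock_score(ligand, protein)))
        tasks.task_done()

if __name__ == "__main__":
    ligands = [f"lig{i}" for i in range(100)]     # hypothetical compound library
    proteins = ["protA", "protB"]                 # hypothetical rigid targets
    tasks = queue.Queue()
    for lig in ligands:
        for prot in proteins:
            tasks.put((lig, prot))
    results = []
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(8)]                 # 8 workers stand in for many PEs
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(len(results))   # 200 ligand-protein pairs scored
```

At supercomputer scale the same idea is hierarchical: management processing elements distribute batches of pairs, while the computing processing elements run the docking kernel itself.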

10.
Comput Math Organ Theory ; : 1-19, 2021 Mar 30.
Article in English | MEDLINE | ID: covidwho-1168994

ABSTRACT

A small number of infected individuals within a community can lead to rapid spread of the disease throughout that community, producing an epidemic outbreak. This is all the more true for highly contagious diseases such as COVID-19, caused by the new coronavirus SARS-CoV-2. Mathematical models of epidemics allow estimation of several impacts on the population and are therefore of great use for defining public health policies. Some of these measures include isolation of the infected (also known as quarantine) and vaccination of the susceptible. In a possible scenario in which a vaccine is available but access is limited, it is necessary to quantify the levels of vaccination to be applied while taking into account the continued application of preventive measures. This work concerns the simulation of the spread of the COVID-19 disease in a community by applying the Monte Carlo method to a Susceptible-Exposed-Infective-Recovered (SEIR) stochastic epidemic model. To handle the computational effort involved, a simple parallelization approach was adopted and deployed on a small HPC cluster. The developed computational method makes it possible to realistically simulate the spread of COVID-19 in a medium-sized community and to study the effect of preventive measures such as quarantine and vaccination. The results show that an effective combination of vaccination with quarantine can prevent the appearance of major epidemic outbreaks, even if the critical vaccination coverage is not reached.
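The Monte Carlo SEIR approach described above can be sketched as independent stochastic trajectories run in parallel, each with quarantine and vaccination knobs. This is an illustrative sketch: all rates and the quarantine/vaccination fractions are placeholder values, not the paper's calibration.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def seir_run(seed, n=1000, days=120, beta=0.3, sigma=0.2, gamma=0.1,
             vaccinated=0.4, quarantine=0.5):
    """One discrete-time stochastic SEIR trajectory; returns total ever infected.

    `vaccinated` is the fraction immunised at t=0; `quarantine` is the fraction
    of infectives isolated (removed from transmission).
    """
    rng = random.Random(seed)
    S = int(n * (1 - vaccinated)) - 5          # susceptibles
    E, I, R = 5, 0, int(n * vaccinated)        # start with 5 exposed
    total = 5
    for _ in range(days):
        effective_I = I * (1 - quarantine)          # isolated cases don't transmit
        p_inf = 1 - (1 - beta / n) ** effective_I   # per-susceptible infection prob.
        new_E = sum(rng.random() < p_inf for _ in range(S))
        new_I = sum(rng.random() < sigma for _ in range(E))
        new_R = sum(rng.random() < gamma for _ in range(I))
        S -= new_E
        E += new_E - new_I
        I += new_I - new_R
        R += new_R
        total += new_E
    return total

if __name__ == "__main__":
    seeds = range(8)
    with ProcessPoolExecutor() as pool:        # one trajectory per worker process
        totals = list(pool.map(seir_run, seeds))
    print(f"mean outbreak size over {len(totals)} runs: "
          f"{sum(totals) / len(totals):.0f}")
```

Since the Monte Carlo trajectories are independent, this parallelization is embarrassingly simple, which matches the "simple parallelization approach" deployed on the small HPC cluster: each node runs its own batch of seeds and only the summary statistics are combined.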
